Statistical Mechanics of Adaptive Weight Perturbation Learning

Authors

  • Ryosuke Miyoshi
  • Yutaka Maeda
  • Seiji Miyoshi
Abstract

Weight perturbation learning was proposed as a learning rule that adds perturbations to the variable parameters of learning machines. Its generalization performance has been analyzed by statistical-mechanical methods, showing that weight perturbation learning has the same asymptotic generalization property as Perceptron learning. In this paper, we consider the difference between two well-known learning rules, Perceptron learning and AdaTron learning. Applying this consideration to weight perturbation learning, we propose adaptive weight perturbation learning. The generalization performance of the proposed rule is analyzed by statistical-mechanical methods, and it is shown that the proposed rule has an outstanding asymptotic property corresponding to that of AdaTron learning.
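The abstract does not state the update rule explicitly; as a rough illustration of the underlying idea, plain (non-adaptive) weight perturbation learning can be sketched as a finite-difference gradient estimate. The function names, step sizes, and squared-error objective below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def weight_perturbation_step(w, loss, eta=0.1, beta=1e-4):
    """One weight-perturbation update: estimate the gradient of `loss`
    by perturbing each weight in turn and measuring the change in the
    loss, then move against the estimated gradient. Illustrative
    sketch only, not the exact rule analyzed in the paper."""
    base = loss(w)
    grad_est = np.zeros_like(w)
    for i in range(len(w)):
        w_pert = w.copy()
        w_pert[i] += beta                      # perturb one parameter
        grad_est[i] = (loss(w_pert) - base) / beta
    return w - eta * grad_est

# usage: minimize a simple quadratic without ever forming its gradient
loss = lambda w: float(np.sum(w ** 2))
w = np.array([1.0, -2.0])
for _ in range(100):
    w = weight_perturbation_step(w, loss)
# w approaches the minimizer at the origin
```

The point of the rule is that only loss evaluations are needed, which is why it suits machines whose gradient is unavailable.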


Related articles

Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks

Gradient-following learning methods can encounter problems of implementation in many applications, and stochastic variants are sometimes used to overcome these difficulties. We analyze three online training methods used with a linear perceptron: direct gradient descent, node perturbation, and weight perturbation. Learning speed is defined as the rate of exponential decay in the learning curves....

Characterizing Network Complexity and Classification Efficiency by the Ratio of Weight Interdependence to Sensitivity

We extend previous research on digital filter structures and parameter sensitivity to the relationship between the nature of hidden-unit activation function, weight sensitivity and interdependence, and classification learning in neural networks. Weight sensitivity indicates the extent of variations in a network's output when reacting to small perturbations in its weights; whereas weight interde...

Statistical Mechanics of Learning

We review the application of statistical mechanics methods to the study of online learning of a drifting concept in the limit of large systems. The model where a feed-forward network learns from examples generated by a time dependent teacher of the same architecture is analyzed. The best possible generalization ability is determined exactly, through the use of a variational method. The construc...

Nonextensive statistical mechanics for hybrid learning of neural networks

This paper introduces a new hybrid approach for learning systems that builds on the theory of nonextensive statistical mechanics. The proposed learning scheme uses only the sign of the gradient, and combines adaptive stepsize local searches with global search steps that make use of an annealing schedule inspired from nonextensive statistics, as proposed by Tsallis. The performance of the hybrid...

Statistical Mechanics of Node-perturbation Learning with Noisy Baseline

Node-perturbation learning is a type of stochastic gradient descent algorithm that can be applied to problems where the objective function is not explicitly formulated, including reinforcement learning. It estimates the gradient of an objective function from the change in the objective function in response to the perturbation. The value of the objective function for an unperturbed output is c...
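As a rough, hypothetical illustration of the idea described above (not the model analyzed in that paper), node-perturbation learning for a single linear unit can be sketched as follows; the noise level, learning rate, and helper names are assumptions, and the baseline here is noiseless, whereas the cited paper studies a noisy baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

def node_perturbation_step(w, x, target, eta=0.02, sigma=0.1):
    """One node-perturbation update for a linear unit y = w.x:
    perturb the output with Gaussian noise, measure the resulting
    change in squared error, and use that change as a stochastic
    gradient signal. Illustrative sketch only; the baseline
    err(y) is taken noise-free here."""
    y = w @ x
    xi = rng.normal(0.0, sigma)                 # output perturbation
    err = lambda out: 0.5 * (out - target) ** 2
    delta_e = err(y + xi) - err(y)              # change in objective
    # E[delta_e * xi] = (y - target) * sigma^2, so this follows
    # the gradient of the squared error on average.
    return w - eta * (delta_e / sigma ** 2) * xi * x

# usage: learn a teacher vector from random examples
teacher = np.array([1.0, -1.0, 0.5])
w = np.zeros(3)
for _ in range(20000):
    x = rng.normal(size=3)
    w = node_perturbation_step(w, x, teacher @ x)
# w drifts toward the teacher vector
```

Only objective-function evaluations are used, which is what makes the rule applicable when the objective is not explicitly formulated.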


Journal:
  • IEICE Transactions

Volume 94-D  Issue 

Pages  -

Publication year 2011